
    Network-on-Chip-based Multi-Processor System-on-Chip: Towards Mixed-Criticality System Certification

    The abstract is in the attachment.

    Efficient Software-Based Partitioning for Commercial-off-the-Shelf NoC-based MPSoCs for Mixed-Criticality Systems

    Some industrial domains characterized by particularly strict safety standards (e.g., avionics) are facing issues in adopting commercial-off-the-shelf multi-processor system-on-chips, and in particular new network-on-chip-based architectures. One key issue is related to the usage of such system-on-chips to implement mixed-criticality systems, with the main goal of reducing the size, weight, and power consumption of on-board equipment by reducing the number of computers, i.e., moving from federated architectures based on single-core processors to a single multi-core processor. To comply with the relevant safety standards, a mixed-criticality system should be proven to enforce isolation among safety-critical and non-safety-critical tasks running on the multi-core hardware platform. This paper presents a software-level methodology tackling this issue. The proposed methodology exploits knowledge of the deterministic routing algorithm used by the network-on-chip to implement a safe and efficient partitioning of the system. The paper presents the software implementation and an experimental evaluation of the solution, proving its suitability for integration in an avionic application.
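    As an illustration of the kind of reasoning the abstract describes, the sketch below computes the NoC links traversed by deterministic XY routing and checks that the flows of two partitions never share a link. It is not the paper's methodology: the mesh size, the node coordinates, and the partition flows are hypothetical.

```python
# Illustrative sketch only: exploit deterministic XY routing on a 2D mesh to
# verify that critical and non-critical traffic flows never contend for a link.

def xy_route(src, dst):
    """Return the directed links used by XY routing (X first, then Y)."""
    (x, y), (dx, dy) = src, dst
    links = []
    while x != dx:                              # route along X first
        nxt = (x + (1 if dx > x else -1), y)
        links.append(((x, y), nxt))
        (x, y) = nxt
    while y != dy:                              # then along Y
        nxt = (x, y + (1 if dy > y else -1))
        links.append(((x, y), nxt))
        (x, y) = nxt
    return links

def links_of(flows):
    """Union of all links used by a set of (src, dst) flows."""
    used = set()
    for src, dst in flows:
        used.update(xy_route(src, dst))
    return used

# Hypothetical partitioning: critical tasks on column 0, non-critical on column 2.
critical_flows = [((0, 0), (0, 2)), ((0, 2), (0, 1))]
non_critical_flows = [((2, 0), (2, 2)), ((2, 1), (2, 2))]

shared = links_of(critical_flows) & links_of(non_critical_flows)
print("isolated" if not shared else f"contention on links: {shared}")
```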

    On the consolidation of mixed criticalities applications on multicore architectures

    Multicore architectures are very appealing as they offer the capability of integrating federated architectures, where multiple independent computing elements are devoted to specific tasks, into a single device, allowing significant mass and power savings. Often, the computing elements in federated architectures run tasks of mixed criticality, i.e., some are mission-/safety-critical real-time tasks, while others are non-critical tasks. When consolidating mixed-criticality tasks on multicore architectures, designers must guarantee that each core does not interfere with the others, which would introduce side effects not possible in federated architectures. In this paper we propose a hybrid solution based on a combination of known techniques: lightweight hardware redundancy, implemented using smart watchdogs and voter logic, cooperates with software redundancy, implemented as software temporal triple modular redundancy for tasks with low criticality and no real-time requirements, and software triple modular redundancy for tasks with high criticality and real-time requirements. To guarantee the lack of interference, a hypervisor is used to segregate the execution of each task in a dedicated resource partition. Preliminary experimental results are reported on a prototypical vision-based navigation system.
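    The software redundancy mentioned above can be pictured with a minimal sketch of a software triple-modular-redundancy wrapper with a majority voter. This is an assumption-laden illustration, not the authors' implementation; the wrapped task (a telemetry-frame checksum) and its inputs are invented for the example.

```python
# Minimal software TMR sketch: run a task three times and majority-vote the result.
from collections import Counter

def tmr(task, *args):
    """Execute task three times sequentially and return the majority result.
    Raises if all three replicas disagree (no majority)."""
    results = [task(*args) for _ in range(3)]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("TMR voter: no majority among replicas")
    return value

# Hypothetical low-criticality task: checksum over a telemetry frame.
def checksum(frame):
    return sum(frame) & 0xFFFF

frame = [0x10, 0x20, 0x30]
print(hex(tmr(checksum, frame)))
```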

    A High-Level Approach to Analyze the Effects of Soft Errors on Lossless Compression Algorithms

    In space applications, the data logging sub-system often requires compression to cope with large amounts of data as well as with limited storage and communication capabilities. The usage of Commercial Off-The-Shelf (COTS) hardware components is becoming more common, since they are particularly suitable to meet high performance requirements and to cut costs with respect to space-qualified ones. On the other hand, given the characteristics of the space environment, the usage of COTS components makes radiation-induced soft errors highly probable. The purpose of this work is to analyze a set of lossless compression algorithms in order to compare their robustness against soft errors. The proposed approach works on the unhardened version of the programs, aiming to estimate their intrinsic robustness. The main contribution of the work lies in investigating the possibility of performing an early comparison between different compression algorithms at a high level, by considering only their data structures (corresponding to program variables). This approach is virtually agnostic of the downstream implementation details, which means that it aims to compare the considered programs (in terms of robustness against soft errors) before the final computing platform is defined. The results of the high-level analysis can also be used to collect useful information to optimize the hardening phase. Experimental results based on the OpenRISC processor are reported. They suggest that, when properly adopted, the proposed approach makes it possible to compare a set of compression algorithms even with very limited knowledge of the target computing system.
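    To make the idea of attacking program variables concrete, the following sketch flips one bit of a variable inside a toy run-length encoder and checks whether the decompressed output still matches the input. The codec, the targeted variable, and the bit position are assumptions; the paper's algorithms and injection framework are not reproduced here.

```python
# Toy example: corrupt a program variable of an unhardened compressor and
# observe whether the end-to-end result is still correct (fault masked) or not.

def rle_compress(data, flip=None):
    """Run-length encode `data`; `flip` = (iteration_start, bit) corrupts the
    run counter at that iteration to emulate a soft error in a program variable."""
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        if flip and flip[0] == i:
            run ^= 1 << flip[1]            # inject the bit flip into `run`
        out.append((data[i], run))
        i += run if run > 0 else 1         # guard against a zero-length run
    return out

def rle_decompress(pairs):
    return [sym for sym, run in pairs for _ in range(run)]

data = [1, 1, 1, 2, 2, 3, 3, 3, 3]
faulty = rle_compress(data, flip=(0, 1))   # flip bit 1 of the first run counter
ok = rle_decompress(faulty) == data
print("fault masked" if ok else "silent data corruption")
```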

    Analysis of the Effects of Soft Errors on Compression Algorithms through Fault Injection inside Program Variables

    Data logging applications, such as those deployed in satellite launchers to acquire telemetry data, may require compression algorithms to cope with large amounts of data as well as limited storage and communication capabilities. When commercial-off-the-shelf hardware components are used to implement such applications, radiation-induced soft errors may occur, especially during the last stages of the launcher cruise, potentially affecting the algorithm execution. The purpose of this work is to analyze two compression algorithms using fault injection to evaluate their robustness against soft errors. The main contribution of the work is the analysis of the compression algorithms' susceptibility by attacking their data structures (also referred to as program variables) rather than the memory elements of the computing platform in charge of the algorithm execution. This approach is agnostic of the downstream implementation details; instead, the intrinsic robustness of the compression algorithms can be evaluated quickly, and system-level decisions can be taken before the computing platform is finalized.
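    A fault injection campaign of the kind the abstract describes can be sketched as a loop that corrupts a program variable at a random point, runs the compressor, and classifies each run as masked, silent data corruption, or crash. The toy delta codec, the fault model (a single random bit flip), and the campaign size below are assumptions for illustration only.

```python
# Minimal campaign sketch: attack program variables rather than platform memory,
# then tally the outcome of each injection run. (The toy delta codec propagates
# every corruption, so all runs end up as silent data corruption here.)
import random

def delta_encode(samples, inject=None):
    """Delta-encode `samples`; `inject` = (index, bit) flips one bit in the
    running `prev` variable to emulate a soft error during execution."""
    prev, out = 0, []
    for idx, s in enumerate(samples):
        if inject and inject[0] == idx:
            prev ^= 1 << inject[1]          # corrupt a program variable
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

random.seed(0)
samples = list(range(0, 100, 3))            # hypothetical telemetry ramp
outcomes = {"masked": 0, "sdc": 0, "crash": 0}
for _ in range(1000):
    inj = (random.randrange(len(samples)), random.randrange(16))
    try:
        restored = delta_decode(delta_encode(samples, inject=inj))
        outcomes["masked" if restored == samples else "sdc"] += 1
    except Exception:
        outcomes["crash"] += 1
print(outcomes)
```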

    Towards Making Fault Injection on Abstract Models a More Accurate Tool for Predicting RT-Level Effects

    Fault injection and fault simulation are typical approaches to analyze the effect of a fault on a hardware/software system. Often, fault injection is done on abstract models of the system, either to retrieve early results when no implementation is available yet, or to speed up the runtime-intensive fault simulation on detailed models. The simulation results from the abstract model are typically inaccurate because details of the concrete hardware are missing. Here, we propose an approach to relate faults in an abstract untimed algorithmic model to their counterparts in the concrete register-transfer-level model. This makes it possible to understand which faults are covered on the concrete model and to speed up the fault simulation process. We use a mapping between the two models' variables, together with mapped timing states, to inject faults into corresponding variables on both models. After fault simulation, the results are compared to check whether a given fault produces the same behavior on both models. The results show that a fault injected into corresponding variables leads to the same behavior on both models for a large share of faults.
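    A minimal sketch of the comparison idea, with invented toy models and an invented variable mapping: the same bit flip is injected into the corresponding accumulator variable of an untimed model and of a more detailed 8-bit-register model, and each run is classified against its own golden result to see whether the two models agree on the fault's behavior.

```python
# Toy abstract-vs-concrete comparison: flips above bit 7 are invisible in the
# 8-bit "concrete" register, so only part of the mapped faults behave the same.

def abstract_mac(a, b, acc, fault_bit=None):
    """Untimed algorithmic multiply-accumulate with unbounded integers."""
    if fault_bit is not None:
        acc ^= 1 << fault_bit               # fault on the mapped variable
    return acc + a * b

def concrete_mac(a, b, acc, fault_bit=None):
    """More detailed model: the accumulator is an 8-bit register."""
    reg = acc & 0xFF
    if fault_bit is not None:
        reg = (reg ^ (1 << fault_bit)) & 0xFF
    return (reg + a * b) & 0xFF

def classify(model, golden, bit):
    return "deviates" if model(3, 5, 7, bit) != golden else "masked"

golden_abs = abstract_mac(3, 5, 7)
golden_con = concrete_mac(3, 5, 7)
agree = sum(
    classify(abstract_mac, golden_abs, b) == classify(concrete_mac, golden_con, b)
    for b in range(16)
)
print(f"{agree}/16 mapped faults produce the same behavior on both models")
```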

    On the robustness of DCT-based compression algorithms for space applications

    A high compression ratio is crucial to cope with the large amounts of data produced by telemetry sensors and the limited transmission bandwidth typical of space applications. A new generation of telemetry units is under development, based on Commercial Off-The-Shelf (COTS) components that may be subject to misbehaviors due to radiation-induced soft errors. The purpose of this paper is to study the impact of soft errors on different configurations of a discrete cosine transform (DCT)-based compression algorithm. The main contribution of this work lies in providing some design guidelines.
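    To illustrate the kind of effect under study, the hedged sketch below applies a one-dimensional 8-point DCT with coarse quantization, flips one bit of a quantized coefficient, and reports the resulting reconstruction error. The block size, quantization step, sample values, and targeted coefficient/bit are assumptions, not the configurations evaluated in the paper.

```python
# Toy DCT-based compression pipeline with a single soft-error bit flip injected
# into one quantized coefficient before reconstruction.
import math

N = 8
Q = 16                                      # hypothetical uniform quantization step

def dct(block):
    """Orthonormal DCT-II of an N-point block."""
    return [
        (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
        * sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n, x in enumerate(block))
        for k in range(N)
    ]

def idct(coeffs):
    """Inverse of the orthonormal DCT-II above."""
    return [
        sum(
            (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * c * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
            for k, c in enumerate(coeffs)
        )
        for n in range(N)
    ]

block = [52, 55, 61, 66, 70, 61, 64, 73]    # hypothetical telemetry samples
quant = [round(c / Q) for c in dct(block)]  # lossy compression step

faulty = list(quant)
faulty[1] ^= 1 << 3                         # soft error: flip bit 3 of coefficient 1

ref = idct([q * Q for q in quant])
bad = idct([q * Q for q in faulty])
err = max(abs(r - b) for r, b in zip(ref, bad))
print(f"max reconstruction error caused by the bit flip: {err:.1f}")
```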